Results 1 - 4 of 4
1.
J Dent. 2024 Jan;140:104793.
Article in English | MEDLINE | ID: mdl-38016620

ABSTRACT

OBJECTIVES: We aimed to understand how artificial intelligence (AI) influences dentists by comparing their gaze behavior when using versus not using AI software to detect primary proximal carious lesions on bitewing radiographs.

METHODS: 22 dentists assessed a median of 18 bitewing images, resulting in 170 datasets from dentists without AI and 179 datasets from dentists with AI, after excluding data with poor gaze-recording quality. We compared time to first fixation, fixation count, average fixation duration, and fixation frequency between the two trial groups. Analyses were performed for the entire image and stratified by (1) presence of carious lesions and/or restorations and (2) lesion depth (E1/2: outer/inner enamel; D1-3: outer-inner third of dentin). We also compared the transitional pattern of the dentists' gaze between the trial groups.

RESULTS: Median time to first fixation was shorter in all groups of teeth for dentists with AI versus without AI, although p > 0.05. Dentists with AI had more fixations (median = 68, IQR = 31, 116) on teeth with restorations than dentists without AI (median = 47, IQR = 19, 100), p = 0.01. In turn, average fixation duration was longer on teeth with caries for dentists with AI than for those without, although p > 0.05. The visual search strategy of dentists with AI was less systematic, with a lower proportion of lateral tooth-wise transitions than for dentists without AI.

CONCLUSIONS: Dentists with AI exhibited more efficient viewing behavior than dentists without AI, e.g., less time taken to notice caries and/or restorations, more fixations on teeth with restorations, and shorter fixations on teeth without carious lesions and/or restorations.

CLINICAL SIGNIFICANCE: Analysis of dentists' gaze patterns while using AI-generated annotations of carious lesions demonstrates how AI influences the way they extract information from dental images. Such insights can be exploited to improve, and even customize, AI-based diagnostic tools, reducing dentists' extraneous attentional processing and allowing a more thorough examination of other image areas.
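The group comparisons above report each gaze metric as a median with an interquartile range (IQR). As a minimal sketch of how such summaries are computed from per-dataset values using Python's standard library (the sample values below are illustrative, not study data):

```python
import statistics

def median_iqr(values):
    """Return (median, 25th percentile, 75th percentile) of a sample."""
    q = statistics.quantiles(values, n=4, method="inclusive")
    return statistics.median(values), q[0], q[2]

# Illustrative per-dataset fixation counts for one trial group (not the study data)
with_ai = [31, 47, 68, 90, 116, 120]
m, q1, q3 = median_iqr(with_ai)
print(f"with AI: median={m}, IQR=({q1}, {q3})")
```

The choice of quantile method (`inclusive` vs. `exclusive`) slightly changes the IQR bounds for small samples; the study does not state which convention was used.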


Subject(s)
Artificial Intelligence; Dental Caries; Humans; Dental Caries Susceptibility; Dental Restoration, Permanent; Practice Patterns, Dentists'; Dental Caries/diagnostic imaging; Dental Caries/pathology; Dentists
2.
J Dent. 2023 Aug;135:104585.
Article in English | MEDLINE | ID: mdl-37301462

ABSTRACT

OBJECTIVES: Understanding dentists' gaze patterns on radiographs may help unravel the sources of their limited accuracy and inform strategies to mitigate them. We conducted an eye-tracking experiment to characterize dentists' scanpaths, and thus their gaze patterns, when assessing bitewing radiographs to detect primary proximal carious lesions.

METHODS: 22 dentists assessed a median of nine bitewing images each, resulting in 170 datasets after excluding data with poor gaze-recording quality. A fixation was defined as an area of attentional focus related to visual stimuli. We calculated time to first fixation, fixation count, average fixation duration, and fixation frequency. Analyses were performed for the entire image and stratified by (1) presence of carious lesions and/or restorations and (2) lesion depth (E1/2: outer/inner enamel; D1-3: outer-inner third of dentin). We also examined the transitional nature of the dentists' gaze.

RESULTS: Dentists had more fixations on teeth with lesions and/or restorations (median = 138 [interquartile range = 87, 204]) than on teeth without them (32 [15, 66]), p < 0.001. Notably, teeth with lesions drew longer fixation durations (407 milliseconds [242, 591]) than those with restorations (289 milliseconds [216, 337]), p < 0.001. Time to first fixation was longer for teeth with E1 lesions (17,128 milliseconds [8813, 21,540]) than for lesions of other depths (p = 0.049). The number of fixations was highest on teeth with D2 lesions (43 [20, 51]) and lowest on teeth with E1 lesions (5 [1, 37]), p < 0.001. Generally, a systematic tooth-by-tooth gaze pattern was observed.

CONCLUSIONS: As hypothesized, while visually inspecting bitewing radiographs, dentists focused more heavily on image features/areas relevant to the assigned task. They also generally examined the entire image in a systematic tooth-by-tooth pattern.
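The four gaze metrics named above are all derived from a list of fixation events per area of interest (AOI, e.g. a single tooth). A minimal sketch, assuming fixations are recorded as onset time, duration, and AOI label (the data model and field names are illustrative, not the study's software):

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    onset_ms: float      # onset relative to image presentation
    duration_ms: float   # how long the gaze dwelled
    aoi: str             # area of interest, e.g. a tooth label

def fixation_metrics(fixations, aoi, viewing_time_ms):
    """Time to first fixation, count, average duration, and frequency for one AOI."""
    hits = [f for f in fixations if f.aoi == aoi]
    if not hits:
        return None  # the AOI was never fixated
    count = len(hits)
    return {
        "time_to_first_fixation_ms": min(f.onset_ms for f in hits),
        "fixation_count": count,
        "avg_fixation_duration_ms": sum(f.duration_ms for f in hits) / count,
        "fixation_frequency_hz": count / (viewing_time_ms / 1000.0),
    }
```

For example, two 200 ms and 300 ms fixations on one tooth over a 10-second viewing yield a count of 2 and a frequency of 0.2 Hz for that tooth.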


Subject(s)
Dental Caries; Dentin; Humans; Dentin/pathology; Radiography, Bitewing; Dental Caries/pathology; Dental Enamel/pathology; Dentists; Practice Patterns, Dentists'
3.
Diagnostics (Basel). 2022 Aug 14;12(8).
Article in English | MEDLINE | ID: mdl-36010318

ABSTRACT

The detection and classification of cystic lesions of the jaw is of high clinical relevance and represents a topic of interest in medical artificial intelligence research. The human clinical diagnostic reasoning process uses contextual information, including the spatial relation of the detected lesion to other anatomical structures, to establish a preliminary classification. Here, we aimed to emulate clinical diagnostic reasoning step by step by using a combined object detection and image segmentation approach on panoramic radiographs (OPGs). We used a multicenter training dataset of 855 OPGs (all positives) and an evaluation set of 384 OPGs (240 negatives). We further compared our models to an international human control group of ten dental professionals from seven countries. The object detection model achieved an average precision of 0.42 (intersection over union (IoU): 0.50, maximal detections: 100) and an average recall of 0.394 (IoU: 0.50-0.95, maximal detections: 100). The classification model achieved a sensitivity of 0.84 for odontogenic cysts and 0.56 for non-odontogenic cysts as well as a specificity of 0.59 for odontogenic cysts and 0.84 for non-odontogenic cysts (IoU: 0.30). The human control group achieved a sensitivity of 0.70 for odontogenic cysts, 0.44 for non-odontogenic cysts, and 0.56 for OPGs without cysts as well as a specificity of 0.62 for odontogenic cysts, 0.95 for non-odontogenic cysts, and 0.76 for OPGs without cysts. Taken together, our results show that a combined object detection and image segmentation approach is feasible in emulating the human clinical diagnostic reasoning process in classifying cystic lesions of the jaw.
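The detection and classification metrics above are gated on an intersection-over-union (IoU) threshold between predicted and ground-truth regions. A minimal sketch of IoU for axis-aligned boxes; the `(x1, y1, x2, y2)` corner format is an assumption for illustration:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when the boxes do not intersect
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

Under the study's detection threshold, a predicted lesion box would count as a hit when `iou(pred, gt) >= 0.50`, while the classification model used the looser 0.30 cutoff.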

4.
Diagnostics (Basel). 2022 Jun 23;12(7).
Article in English | MEDLINE | ID: mdl-35885432

ABSTRACT

We aimed to assess the effects of hyperparameter tuning and automatic image augmentation on deep learning-based classification of orthodontic photographs along the Angle classes. Our dataset consisted of 605 images of Angle class I, 1038 of class II, and 408 of class III. We trained ResNet architectures with different combinations of learning rate and batch size; for the best combination, we compared the performance of models trained with and without automatic augmentation using 10-fold cross-validation. We used GradCAM to increase explainability; it provides heat maps highlighting the salient areas relevant for the classification. The best combination of hyperparameters yielded a model with an accuracy of 0.63-0.64, F1-score of 0.61-0.62, sensitivity of 0.59-0.65, and specificity of 0.80-0.81. For all metrics, there was an ideal corridor of batch size and learning rate combinations; smaller learning rates were associated with higher classification performance. Overall, performance was highest for learning rates of around 1-3 × 10⁻⁶ combined with a batch size of eight. Automatic augmentation further improved all metrics by 5-10%. Misclassifications were most common between Angle classes I and II. GradCAM showed that the models also employed features relevant for human classification. The choice of hyperparameters drastically affected the performance of the deep learning models, and automatic image augmentation yielded further improvements. Our models managed to classify the sagittal dental occlusion along the Angle classes from digital intraoral photos.
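The per-class sensitivity and specificity figures reported in both Diagnostics studies can be derived from a confusion matrix in a one-vs-rest fashion. A minimal sketch; the Angle-class counts below are illustrative, not the study's results:

```python
def per_class_metrics(confusion, cls):
    """One-vs-rest sensitivity and specificity from a {true: {pred: count}} matrix."""
    classes = confusion.keys()
    tp = confusion[cls][cls]                                        # correctly called cls
    fn = sum(confusion[cls][p] for p in classes if p != cls)        # missed cls
    fp = sum(confusion[t][cls] for t in classes if t != cls)        # wrongly called cls
    tn = sum(confusion[t][p] for t in classes for p in classes
             if t != cls and p != cls)                              # correctly not cls
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp)}

# Illustrative confusion matrix (rows: true Angle class, cols: predicted)
cm = {
    "I":   {"I": 40, "II": 15, "III": 5},
    "II":  {"I": 20, "II": 70, "III": 10},
    "III": {"I": 5,  "II": 10, "III": 25},
}
print(per_class_metrics(cm, "I"))
```

The same computation with the cyst-classification matrix would reproduce the odontogenic/non-odontogenic figures of the previous entry.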
